1.
Neural Netw ; 166: 379-395, 2023 Sep.
Article En | MEDLINE | ID: mdl-37549607

Support vector machines (SVMs) are powerful statistical learning tools, but applying them to large datasets can incur prohibitively long training times. To address this issue, various instance selection (IS) approaches have been proposed, which choose a small fraction of critical instances and screen out the rest before training. However, existing methods have not balanced accuracy and efficiency well: some miss critical instances, while others use selection schemes so complicated that they require even more execution time than training with all original instances, defeating the purpose of IS. In this work, we present a newly developed IS method called Valid Border Recognition (VBR). VBR selects the closest heterogeneous neighbors, i.e., the nearest neighbors belonging to a different class, as valid border instances and incorporates this selection into the creation of a reduced Gaussian kernel matrix, thus minimizing the execution time. To improve reliability, we propose a strengthened version of VBR (SVBR) that, starting from VBR's selection, gradually adds farther heterogeneous neighbors as complements until the Lagrange multipliers of the already selected instances become stable. Numerical experiments on benchmark and synthetic datasets verify the effectiveness of the proposed methods in terms of accuracy, execution time, and inference time.
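The selection step described above, pairing each instance with its closest heterogeneous (other-class) neighbor, can be illustrated in a few lines. This is a minimal sketch under assumed names and a brute-force distance computation; the published VBR fuses this selection into the construction of a reduced Gaussian kernel matrix, which is not reproduced here.

```python
import numpy as np

def closest_heterogeneous_neighbors(X, y):
    """Collect each instance together with its nearest other-class
    neighbor; the union approximates the set of border instances.
    Brute-force O(n^2) distances -- illustration only."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    border = set()
    for i in range(len(X)):
        hetero = np.where(y != y[i])[0]        # indices of other-class points
        j = hetero[np.argmin(d2[i, hetero])]   # closest heterogeneous neighbor
        border.update((i, j))
    return np.array(sorted(border))

# Usage: train the SVM on the reduced set instead of all instances.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = closest_heterogeneous_neighbors(X, y)
X_red, y_red = X[idx], y[idx]
```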


Algorithms; Support Vector Machine; Reproducibility of Results
3.
Nat Commun ; 14(1): 2217, 2023 Apr 18.
Article En | MEDLINE | ID: mdl-37072418

Understanding diffusive processes in networks is a significant challenge in complexity science. Networks possess a diffusive potential that depends on their topological configuration, but diffusion also relies on the process and its initial conditions. This article presents Diffusion Capacity, a measure of a node's potential to diffuse information, based on a distance distribution that considers both geodesic and weighted shortest paths together with dynamical features of the diffusion process. Diffusion Capacity thoroughly describes the role of individual nodes during a diffusion process and can identify structural modifications that may improve diffusion mechanisms. The article also defines Diffusion Capacity for interconnected networks and introduces Relative Gain, which compares the performance of a node in a single structure versus an interconnected one. Applied to a global climate network constructed from surface air temperature data, the method reveals a significant change in diffusion capacity around the year 2000, suggesting a loss of the planet's diffusion capacity that could contribute to the emergence of more frequent climatic events.
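The node distance distributions at the core of the measure are straightforward to compute. The toy score below (inverse mean shortest-path distance) is only a rough stand-in for Diffusion Capacity, whose actual formula also incorporates weighted paths and the dynamics of the process; all names here are illustrative.

```python
import networkx as nx
import numpy as np

def distance_distribution(G, node):
    """Fraction of other nodes at each shortest-path distance from `node`."""
    lengths = nx.single_source_shortest_path_length(G, node)
    counts = np.bincount(list(lengths.values()))[1:]   # drop distance 0 (itself)
    return counts / counts.sum()

def diffusion_proxy(G, node):
    """Inverse mean distance to all other nodes: a crude proxy giving
    higher scores to nodes whose distance mass sits close by."""
    p = distance_distribution(G, node)
    return 1.0 / (p * np.arange(1, len(p) + 1)).sum()

G = nx.karate_club_graph()
best = max(G, key=lambda v: diffusion_proxy(G, v))
print(best, diffusion_proxy(G, best))
```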

4.
Ann Math Artif Intell ; 91(2-3): 349-372, 2023.
Article En | MEDLINE | ID: mdl-36721866

In this paper, we investigate a novel physician scheduling problem in the mobile cabin hospitals (MCHs) constructed in Wuhan, China, during the outbreak of the COVID-19 pandemic. The shortage of physicians and the surge of patients made physician scheduling in MCHs very challenging. The goal of the studied problem is to find an approximately optimal schedule that minimizes physician workload while satisfying the service requirements of patients as far as possible. We propose a novel hybrid algorithm, named PSO-VND, that integrates particle swarm optimization (PSO) with variable neighborhood descent (VND) to find an approximate global optimum. A self-adaptive mechanism is developed to choose the updating operators dynamically during the search. Based on the special features of the problem, three neighborhood structures are designed and searched within VND to improve the solution. Experimental comparisons show that the proposed PSO-VND significantly outperforms the competing algorithms.
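The VND component admits a compact generic skeleton, sketched below with assumed names; the paper's three problem-specific neighborhood structures and the PSO coupling are not reproduced.

```python
def vnd(solution, neighborhoods, cost):
    """Variable neighborhood descent: try neighborhood structures in
    order, restarting from the first whenever a move improves the cost.
    `neighborhoods` is a list of functions mapping a solution to its
    best neighbor under one move type (e.g. shift, swap, reassign)."""
    k = 0
    while k < len(neighborhoods):
        candidate = neighborhoods[k](solution)
        if cost(candidate) < cost(solution):
            solution, k = candidate, 0   # improvement found: restart at N_0
        else:
            k += 1                       # neighborhood exhausted: try the next
    return solution
```

In a PSO-VND hybrid of this kind, such a descent is typically applied to promising particles after each swarm update, so the swarm explores globally while VND intensifies locally.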

5.
IEEE Trans Cybern ; 53(7): 4619-4629, 2023 Jul.
Article En | MEDLINE | ID: mdl-34910659

Realistic epidemic spreading is usually driven by traffic flow in networks, which classic diffusion models do not capture. Moreover, the progression of a node's infection from a mild to a severe phase has not been specifically addressed in previous epidemic modeling. To address these issues, we propose a novel traffic-driven epidemic spreading model that introduces a new epidemic state, the severe state, characterizing serious infection of a node as distinct from the initial mild infection. We derive the dynamic equations of our model with the tools of individual-based mean-field approximation and continuous-time Markov chains. We find that, besides the infection and recovery rates, the epidemic threshold of our model is determined by the largest real eigenvalue of a communication frequency matrix that we construct. Finally, we study how epidemic spreading is influenced by representative distributions of infection control resources. In particular, we observe that the uniform and Weibull distributions of control resources, which perform very similarly, are much better than the Pareto distribution at suppressing epidemic spreading.
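Spectral thresholds of this kind typically state that an outbreak can persist only when the effective infection rate exceeds the inverse of the largest real eigenvalue. The check below illustrates that general form on a toy matrix; the paper's actual communication frequency matrix and threshold expression are not reproduced.

```python
import numpy as np

def above_epidemic_threshold(C, beta, mu):
    """Spectral threshold check: spreading can persist when the
    effective infection rate beta/mu exceeds 1/lambda_max(C), where
    lambda_max is the largest real eigenvalue of the non-negative
    matrix C. The matrix below is a stand-in; the paper constructs
    C from traffic flow between node pairs."""
    lam = np.linalg.eigvals(C)
    lam_max = lam[np.argmax(lam.real)].real
    return (beta / mu) * lam_max > 1.0

C = np.array([[0, 2, 1],
              [2, 0, 3],
              [1, 3, 0]], dtype=float)   # toy symmetric frequency matrix
print(above_epidemic_threshold(C, beta=0.1, mu=0.4))
```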


Epidemics; Markov Chains; Communication; Diffusion
7.
Ann Oper Res ; 316(1): 699-721, 2022.
Article En | MEDLINE | ID: mdl-35531563

Global vaccine revenues are projected at $59.2 billion, yet large-scale vaccine distribution remains challenging for many diseases in countries around the world. Poor management of the vaccine supply chain can lead to a disease outbreak or, at worst, a pandemic. Fortunately, many of these challenges, such as decision-making for optimal allocation of resources, vaccination strategy, and inventory management, can be addressed through optimization approaches. This work aims to understand how optimization has been applied to the vaccine supply chain and logistics. To achieve this, we conducted a rapid review, searching four scientific databases for peer-reviewed journal articles published between 2009 and March 2020. The search returned 345 articles, of which 25 unique studies met our inclusion criteria. Our analysis focused on identifying article characteristics such as research objectives, the vaccine supply chain stage addressed, the optimization method used, and whether outbreak scenarios were considered. Approximately 64% of the studies dealt with vaccination strategy, and the remainder dealt with logistics and inventory management; only one addressed market competition (4%). There were 14 different types of optimization methods used, with control theory, linear programming, mathematical modeling, and mixed-integer programming the most common (12% each). Uncertainty was considered in the models of 44% of the studies. One notable observation was the lack of studies using optimization for vaccine inventory management and logistics. The results provide an understanding of how optimization models have been used to address challenges in large-scale vaccine supply chains.

8.
Comput Intell Neurosci ; 2022: 5699472, 2022.
Article En | MEDLINE | ID: mdl-35535198

Human Learning Optimization (HLO) is an efficient metaheuristic in which three learning operators, the random learning operator, the individual learning operator, and the social learning operator, search for optima by mimicking the learning behaviors of humans. In real life, however, people learn not only from the global best but also from the best solutions of other individuals, and the operators of Differential Evolution (DE) are likewise updated based on the optima of other individuals. Inspired by these facts, this paper proposes two novel differential human learning optimization algorithms (DEHLOs), which introduce the Differential Evolution strategy to enhance the optimization ability of HLO. The two algorithms, which improve HLO at the individual and population levels, are named DEHLO1 and DEHLO2, respectively. Multidimensional knapsack problems are adopted as benchmarks to validate the performance of the DEHLOs, and the results are compared with the standard HLO and Modified Binary Differential Evolution (MBDE) as well as other state-of-the-art metaheuristics. The experimental results demonstrate that the developed DEHLOs significantly outperform the other algorithms, with DEHLO2 achieving the best overall performance across problems.
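For reference, the classic DE/rand/1 mutation that the differential strategy builds on is sketched below in its standard real-valued form; how the DEHLOs adapt it to HLO's binary encoding is not detailed in the abstract and is not reproduced here.

```python
import numpy as np

def de_rand_1(pop, F=0.5):
    """Classic DE/rand/1 mutation: each mutant combines three distinct,
    randomly chosen individuals from the population. This is the
    standard DE building block, not the binary DEHLO variant."""
    n = len(pop)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = np.random.choice([j for j in range(n) if j != i],
                                      size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])  # difference vector step
    return mutants

pop = np.random.rand(10, 5)      # toy population of 10 five-dimensional points
print(de_rand_1(pop)[0])
```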


Algorithms; Humans
9.
Patterns (N Y) ; 1(1): 100003, 2020 Apr 10.
Article En | MEDLINE | ID: mdl-33205080

Traditionally, networks have been studied in an independent fashion. With the emergence of novel smart city technologies, coupling among networks has strengthened. To capture this ever-increasing coupling, we explain the notion of interdependent networks, i.e., multi-layered networks with shared decision-making entities and shared sensing infrastructures, with interdisciplinary applications. The main challenge is how to develop data analytics solutions capable of enabling interdependent decision making. One emerging solution is agent-based distributed decision making among heterogeneous agents and entities whose decisions are affected by multiple networks. We first provide a big picture of real-world interdependent networks in the context of smart city infrastructures. We then outline potential challenges and solutions from a data science perspective, discuss potential hindrances to reliable communication among intelligent agents from different networks, and explore future research directions at the intersection of network science and data science.

10.
IEEE Trans Cybern ; 50(5): 2274-2287, 2020 May.
Article En | MEDLINE | ID: mdl-30530345

Over the last few decades, decomposition-based multiobjective evolutionary algorithms (DMOEAs) have become one of the mainstream approaches to multiobjective optimization. However, there has so far been little research on applying DMOEAs to uncertain problems. Usually, the uncertainty is modeled as additive noise in the objective space, which is the case this paper concentrates on. This paper first carries out experiments to examine the impact of noisy environments on DMOEAs. Four noise-handling techniques, based on analyses of the empirical results, are then proposed. First, a Pareto-based nadir point estimation strategy is put forward to provide a good normalization of each objective. Next, we introduce two adaptive sampling strategies that vary the number of samples used per solution, based on the differences among neighboring solutions and their variance, to control the tradeoff between exploration and exploitation. Finally, a mixed objective evaluation strategy and a mixed repair mechanism are proposed to alleviate the effects of noise and to remedy the loss of diversity in the decision space, respectively. These features are embedded in two popular DMOEAs (i.e., MOEA/D and DMOEA-[Formula: see text]), and the DMOEAs with these features are called noise-tolerant DMOEAs (NT-DMOEAs). The NT-DMOEAs are compared with their variants and with four noise-tolerant multiobjective algorithms, including the improved NSGA-II, the classical Bayesian (1+1)-ES (BES), and the state-of-the-art MOP-EA and rolling tide evolutionary algorithm, to show the superiority of the proposed features on 17 benchmark problems with different noise strength levels. Experimental studies demonstrate that the two NT-DMOEAs, especially NT-DMOEA-[Formula: see text], show remarkable advantages over competitors in the majority of test instances.
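The idea behind variance-driven adaptive sampling, spend more evaluations on solutions whose noisy objective values are still uncertain, can be sketched generically. This is an assumed scalar routine, not the paper's two strategies, which also exploit differences among neighboring solutions in the decomposition.

```python
import numpy as np

def adaptive_evaluate(f_noisy, x, base_samples=3, max_samples=20, tol=0.05):
    """Average repeated noisy evaluations, adding samples until the
    standard error of the mean drops below `tol` or the budget runs
    out. Generic resampling sketch under assumed parameter names."""
    vals = [f_noisy(x) for _ in range(base_samples)]
    while len(vals) < max_samples:
        sem = np.std(vals, ddof=1) / np.sqrt(len(vals))
        if sem < tol:
            break                     # estimate is stable enough
        vals.append(f_noisy(x))       # variance still high: sample again
    return np.mean(vals)

f = lambda x: (x - 1.0) ** 2 + np.random.normal(0, 0.1)   # noisy objective
print(adaptive_evaluate(f, 0.5))
```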

11.
Sci Rep ; 9(1): 4511, 2019 03 14.
Article En | MEDLINE | ID: mdl-30872604

Diversity, understood as the variety of different elements or configurations that an extensive system has, is a crucial property that allows a system to maintain its functionality in a changing environment, where failures, random events, or malicious attacks are often unavoidable. Despite the relevance of preserving diversity in contexts such as ecology, biology, transport, and finance, the elements or configurations that contribute most to the diversity are often unknown and thus cannot be protected against failures or environmental crises. This is because no generic framework exists for identifying which elements or configurations play crucial roles in preserving a system's diversity. Existing methods treat the level of heterogeneity of a system as a measure of its diversity, which makes them unsuitable when systems are composed of a large number of elements with different attributes and types of interactions. Moreover, with limited resources one needs to find the best preservation policy, i.e., one needs to solve an optimization problem. Here we aim to bridge this gap by developing a metric between labeled graphs to compute the diversity of a system, which allows identifying the most relevant components based on their contribution to a global diversity value. The proposed framework is suitable for large multiplex structures, constituted by a set of elements represented as nodes that have different types of interactions, represented as layers. The proposed method allows us to find, in a genetic network (HIV-1), the elements with the highest diversity values, while in a European airline network we systematically identify the companies that maximize (and those that least compromise) the variety of options for routes connecting different airports.
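The "contribution to a global diversity value" idea can be illustrated with a leave-one-out scheme. Everything below is an illustrative assumption: the edge-Jaccard distance stands in for the paper's metric between labeled graphs, and layers stand in for the system's elements.

```python
import itertools
import networkx as nx

def edge_jaccard_distance(G, H):
    """Placeholder distance between two graphs on the same node set:
    one minus the Jaccard similarity of their edge sets."""
    a, b = set(G.edges()), set(H.edges())
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def diversity(layers):
    """System diversity as the mean pairwise distance between layers."""
    pairs = list(itertools.combinations(layers, 2))
    return sum(edge_jaccard_distance(g, h) for g, h in pairs) / len(pairs)

def contribution(layers, i):
    """How much layer i contributes: diversity drop when it is removed."""
    rest = layers[:i] + layers[i + 1:]
    return diversity(layers) - diversity(rest)

layers = [nx.gnp_random_graph(20, 0.1, seed=s) for s in range(4)]
print([round(contribution(layers, i), 3) for i in range(4)])
```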

12.
Environ Sci Pollut Res Int ; 26(18): 17918-17926, 2019 Jun.
Article En | MEDLINE | ID: mdl-29238924

This paper shifts the discussion of low-carbon technology from science to the economy, especially the reactions of a manufacturer to government regulations. One major concern is uncertainty about the effects of government regulation on the manufacturing industry. On the trust side, will manufacturers trust the government's commitment to strictly supervise carbon emission reduction? Will a manufacturer in a traditional industry consciously follow a low-carbon policy? On the profit side, does an equilibrium exist between manufacturer and government when each decides which strategy to adopt to maximize profit under carbon emission reduction? To address these questions, this paper estimates the economic benefits to manufacturers associated with policy regulations in a low-carbon technology market. The conflict of interest between the government and the manufacturer is formalized as a game-theoretic model, and a mixed-strategy Nash equilibrium is derived and analyzed. The results indicate that when the punishment levied on the manufacturer or the loss to the government is sizable, the manufacturer will be prone to developing innovative technology and the government will be unlikely to supervise the manufacturer.
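A mixed-strategy equilibrium of this supervise-versus-innovate kind follows from the players' indifference conditions, and for a 2x2 game it has a closed form. The payoff numbers below are invented for illustration; the paper's actual payoff structure is not reproduced.

```python
import numpy as np

def mixed_equilibrium_2x2(A, B):
    """Mixed-strategy Nash equilibrium of a 2x2 bimatrix game: A is the
    row player's payoff matrix, B the column player's. Returns (p, q),
    the probabilities that row and column play their first strategies.
    Each player mixes so as to make the opponent indifferent:
      q*A[0,0] + (1-q)*A[0,1] = q*A[1,0] + (1-q)*A[1,1], and
      p*B[0,0] + (1-p)*B[1,0] = p*B[0,1] + (1-p)*B[1,1]."""
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    return p, q

# Government rows: (supervise, ignore); manufacturer columns: (innovate, pollute).
G = np.array([[-1.0,  0.0],    # supervising costs; catching a polluter offsets the loss
              [ 0.0, -6.0]])   # ignoring a polluter is costly
M = np.array([[ 1.0, -2.0],    # polluting under supervision is punished
              [ 1.0,  3.0]])   # polluting while ignored pays best
print(mixed_equilibrium_2x2(G, M))   # (0.4, ~0.857)
```

With these toy payoffs the government supervises with probability 0.4 and the manufacturer innovates with probability 6/7; one can check that each side is exactly indifferent between its two pure strategies at those rates.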


Carbon; Environmental Pollution/legislation & jurisprudence; Government Regulation; Manufacturing Industry/legislation & jurisprudence; China; Decision Making; Technology
13.
Expert Syst ; 36(5)2019 Oct.
Article En | MEDLINE | ID: mdl-33162636

In this paper, the problem of mining complex temporal patterns in multivariate time series is considered. A new method, Fast Temporal Pattern Mining with Extended Vertical Lists, is introduced. The method is based on an extension of the level-wise property, which requires a more complex pattern to start only at positions within a record where all of its subpatterns start. The approach is built around a novel data structure, the Extended Vertical List, which tracks the positions of the first state of a pattern inside records and links them to the corresponding positions of a specific subpattern of the pattern called the prefix. Extensive computational results indicate that the new method is significantly faster than the previous Temporal Pattern Mining algorithm; the increase in speed, however, comes at the expense of increased memory usage.
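The basic vertical representation underlying such structures maps each state to its occurrence positions per record. The sketch below shows only that base layer; the paper's extension, linking first-state positions to the positions of a pattern's prefix, is not reproduced, and the names are assumptions.

```python
from collections import defaultdict

def build_vertical_lists(records):
    """Map each state (symbol) to the positions where it occurs in each
    record: state -> {record_id: [positions]}. This is the plain
    vertical-list layout that Extended Vertical Lists build on."""
    index = defaultdict(lambda: defaultdict(list))
    for rid, record in enumerate(records):
        for pos, state in enumerate(record):
            index[state][rid].append(pos)
    return index

records = [list("ABCAB"), list("CABA")]
vl = build_vertical_lists(records)
print(dict(vl["A"]))   # {0: [0, 3], 1: [1, 3]}
```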

14.
Phys Rev E ; 95(1-1): 012322, 2017 Jan.
Article En | MEDLINE | ID: mdl-28208369

For many power-limited networks, such as wireless sensor networks and mobile ad hoc networks, maximizing the network lifetime is the first concern in design and maintenance. We study network lifetime from the perspective of network science. In our model, nodes move in a square area, are initially assigned a fixed amount of energy, and consume that energy when delivering packets. We obtain four different traffic regimes: the no-, slow-, fast-, and absolute-congestion regimes, which depend essentially on the packet generation rate. We derive the network lifetime by considering the specific regime of the traffic flow. We find that traffic congestion inversely affects network lifetime, in the sense that high traffic congestion results in a short network lifetime. We also discuss the impacts of factors such as communication radius, node moving speed, and routing strategy on network lifetime and traffic congestion.
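The energy-depletion mechanism can be illustrated with a toy simulation in which lifetime ends when the first node exhausts its battery. All parameters and the uniform relay choice below are illustrative assumptions; the paper's model additionally includes mobility, routing strategies, and the congestion regimes.

```python
import random

def network_lifetime(n_nodes=50, energy=100.0, gen_rate=0.2,
                     cost_per_packet=1.0, seed=0):
    """Toy lifetime simulation: each step, every node generates a packet
    with probability `gen_rate`, and a randomly chosen relay pays
    `cost_per_packet`; lifetime is the first step at which any node
    runs out of energy."""
    rng = random.Random(seed)
    battery = [energy] * n_nodes
    step = 0
    while min(battery) > 0:
        step += 1
        for _ in range(n_nodes):
            if rng.random() < gen_rate:
                relay = rng.randrange(n_nodes)   # stand-in for a routed path
                battery[relay] -= cost_per_packet
    return step

print(network_lifetime())
```

Raising `gen_rate` in this sketch shortens the lifetime, a crude analogue of the congestion effect the abstract describes.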

15.
Nat Commun ; 8: 13928, 2017 01 09.
Article En | MEDLINE | ID: mdl-28067266

Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extracting only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison based on quantifying differences among the distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the proposed measure can identify and quantify structural topological differences that have a practical impact on information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
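Comparing distance probability distributions can be sketched with a Jensen-Shannon distance between the two graphs' global distance histograms. This is a simplified proxy under assumed names; the published measure is richer (it also compares per-node distributions and heterogeneity terms).

```python
import networkx as nx
import numpy as np
from scipy.spatial.distance import jensenshannon

def distance_histogram(G, bins):
    """Distribution of shortest-path distances over all ordered node pairs."""
    d = [l for src, lengths in nx.shortest_path_length(G)
           for dst, l in lengths.items() if src != dst]
    h = np.bincount(d, minlength=bins).astype(float)
    return h / h.sum()

def distance_dissimilarity(G, H):
    """Jensen-Shannon distance between the two graphs' distance
    distributions (graphs assumed connected)."""
    bins = max(nx.diameter(G), nx.diameter(H)) + 1
    return jensenshannon(distance_histogram(G, bins),
                         distance_histogram(H, bins))

print(distance_dissimilarity(nx.cycle_graph(10), nx.path_graph(10)))
```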

17.
Int J Bioinform Res Appl ; 10(1): 59-74, 2014.
Article En | MEDLINE | ID: mdl-24449693

Identification of targets, generally viruses or bacteria, in a biological sample is a relevant problem in medicine. Biologists can use hybridisation experiments to determine whether a specific DNA fragment, which represents the virus, is present in a DNA solution. A probe is a segment of DNA or RNA, labelled with a radioactive isotope, dye, or enzyme, used to find a specific target sequence on a DNA molecule by hybridisation. Selecting unique probes through hybridisation experiments is a difficult task, especially when targets have a high degree of similarity, for instance in the case of closely related viruses. After preliminary experiments performed with a canonical Monte Carlo method with Heuristic Reduction (MCHR), a new combinatorial optimisation approach, the Space Pruning Monotonic Search (SPMS) method, is introduced. The experiments show that SPMS provides high-quality solutions and outperforms the current state-of-the-art algorithms.
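The underlying combinatorial problem, finding probes that hybridise with exactly one target, can be shown with a brute-force baseline that models hybridisation as exact substring matching. This is an assumed simplification for illustration, not the SPMS or MCHR methods, whose search and pruning schemes the abstract does not detail.

```python
def unique_probes(targets, probe_len=8):
    """Enumerate candidate probes (substrings of the targets) and keep
    those that occur in exactly one target, mapping each kept probe to
    the target it identifies."""
    candidates = {t[i:i + probe_len]
                  for t in targets for i in range(len(t) - probe_len + 1)}
    return {p: next(t for t in targets if p in t)
            for p in candidates
            if sum(p in t for t in targets) == 1}

# Toy targets standing in for closely related viral sequences.
targets = ["ACGTACGTTAGC", "ACGTACGATAGC", "TTGCACGTTAGG"]
for probe, target in sorted(unique_probes(targets, 6).items()):
    print(probe, "->", target)
```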


Algorithms; DNA Probes/genetics; DNA, Viral/genetics; Data Interpretation, Statistical; In Situ Hybridization/methods; Sequence Analysis, DNA/methods; Base Sequence; Molecular Sequence Data
18.
Methods Mol Biol ; 829: 593-603, 2012.
Article En | MEDLINE | ID: mdl-22231840

Mathematical sciences and computational methods have found new applications in fields like medicine over the last few decades. Modern data acquisition and data analysis protocols have been of great assistance to medical researchers and clinical scientists. Especially in psychiatry, technology and science have made new computational methods available to assist the development of predictive modeling and to identify diseases more accurately. Data mining (or knowledge discovery) aims to extract information from large datasets and solve challenging tasks, like patient assessment, early mental disease diagnosis, and drug efficacy assessment. Accurate and fast data analysis methods are very important, especially when dealing with severe psychiatric diseases like schizophrenia. In this paper, we focus on computational methods related to data analysis and more specifically to data mining. Then, we discuss some related research in the field of psychiatry.


Data Mining/methods; Mental Disorders/classification; Mental Disorders/diagnosis; Psychiatry/methods; Statistics as Topic/methods; Databases as Topic; Humans
19.
Artif Intell Med ; 53(2): 119-25, 2011 Oct.
Article En | MEDLINE | ID: mdl-21868208

OBJECTIVE: Accurate cell death discrimination is a time-consuming and expensive process that can only be performed in biological laboratories. Nevertheless, it is very useful and arises in many biological and medical applications. METHODS AND MATERIAL: Raman spectra were collected for 84 samples of the A549 cell line (human lung cancer epithelial cells) exposed to toxins to induce necrotic and apoptotic death. The proposed data mining approach to the multiclass cell death discrimination problem uses a multiclass regularized generalized eigenvalue classification algorithm (multiReGEC) together with a dimensionality reduction algorithm based on spectral clustering. RESULTS: The proposed algorithmic scheme classifies A549 lung cancer cells from three classes (apoptotic death, necrotic death, and control cells) with 97.78 ± 0.047% accuracy, versus 92.22 ± 0.095% without the proposed feature selection preprocessing. The spectral regions selected by the algorithm correspond to the >C=O bond from the lipids and the lipid bilayer, a chemical structure that undergoes different changes of state depending on the type of cell death. Further evidence of the validity of the technique is obtained through the successful classification of 7 cell spectra from cells that underwent hyperthermic treatment. CONCLUSIONS: In this study we propose a fast and automated way of processing Raman spectra for cell death discrimination, using a feature selection algorithm that not only enhances the classification accuracy but also gives more insight into the underlying cell death process.
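The binary, linear core of generalized eigenvalue classification can be sketched as follows: find a hyperplane close to one class and far from the other by solving a regularized generalized eigenproblem. This is an assumed simplification; multiReGEC is a multiclass kernel variant, and the spectral-clustering feature selection is a separate step.

```python
import numpy as np
from scipy.linalg import eigh

def regec_planes(A, B, delta=1e-3):
    """For class matrices A and B (rows are samples), find hyperplane
    weights minimizing ||[A,-e]w||^2 / ||[B,-e]w||^2 and vice versa,
    via regularized generalized eigenproblems. The last component of
    each returned vector is the plane offset."""
    def gram(M):
        Me = np.hstack([M, -np.ones((len(M), 1))])   # fold the offset into w
        return Me.T @ Me
    GA = gram(A) + delta * np.eye(A.shape[1] + 1)    # Tikhonov regularization
    GB = gram(B) + delta * np.eye(B.shape[1] + 1)
    w_A = eigh(GA, GB)[1][:, 0]   # smallest gen. eigenvector: plane hugging A
    w_B = eigh(GB, GA)[1][:, 0]   # plane hugging B
    return w_A, w_B               # classify a point by its nearer plane

rng = np.random.default_rng(1)
A = rng.normal(0, 1, (30, 4))     # toy stand-ins for per-class spectra features
B = rng.normal(2, 1, (30, 4))
w_A, w_B = regec_planes(A, B)
```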


Algorithms; Cell Death; Neoplasms/pathology; Apoptosis; Gene Expression Profiling/methods; Humans; Lung Neoplasms/pathology; Reproducibility of Results
20.
Neurosci Lett ; 499(1): 47-51, 2011 Jul 15.
Article En | MEDLINE | ID: mdl-21624430

Recent studies have shown that aging, psychiatric and neurologic diseases, and dopaminergic blockade all result in altered brain network efficiency. We investigated the efficiency of human brain functional networks, as measured by fMRI, in individuals with idiopathic Parkinson's disease (N=14) compared to healthy age-matched controls (N=15). Functional connectivity between 116 cortical and subcortical regions was estimated by wavelet correlation analysis in the frequency interval of 0.06-0.12 Hz, and the efficiency of the associated network was compared between groups. We found that individuals with Parkinson's disease had markedly decreased nodal and global efficiency compared to healthy age-matched controls. Our results suggest that this algorithmic approach and graph metrics might be used to identify and track neurodegenerative diseases; however, more studies will be needed to evaluate the utility of this type of analysis for different disease states.
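Nodal and global efficiency are standard graph metrics: a node's efficiency is the mean inverse shortest-path distance to every other node, and global efficiency averages this over all nodes. The sketch below computes both on a toy 116-node graph; the random graph is a stand-in, not the study's thresholded wavelet-correlation networks.

```python
import networkx as nx

def nodal_efficiency(G, node):
    """Mean inverse shortest-path distance from `node` to all others."""
    lengths = nx.single_source_shortest_path_length(G, node)
    inv = [1.0 / d for v, d in lengths.items() if v != node]
    return sum(inv) / (len(G) - 1)

G = nx.watts_strogatz_graph(116, 6, 0.1, seed=1)   # toy 116-region network
print(nx.global_efficiency(G))                     # NetworkX built-in
print(nodal_efficiency(G, 0))                      # efficiency of one region
```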


Brain Waves/physiology; Magnetic Resonance Imaging/methods; Nerve Net/physiopathology; Parkinson Disease/diagnosis; Aged; Female; Humans; Male; Middle Aged; Models, Neurological; Parkinson Disease/physiopathology
...